Close human-robot interaction (HRI), particularly in industrial scenarios, has been extensively studied as a way to combine the complementary strengths of human and robot skills. For effective HRI, the effectiveness of currently available human-robot communication media and tools should be questioned, and new communication modalities should be explored. This paper presents a modular architecture that allows human operators to interact with robots through different modalities. In particular, we instantiate the architecture with a smartwatch and a tablet to handle gesture and touchscreen inputs, respectively. Finally, we conduct a comparative user experience study between these two modalities.
Human-robot collaborative assembly systems improve workplace efficiency and productivity, but may increase workers' cognitive demands. This paper proposes an online, quantitative framework to assess the cognitive workload induced by interaction with a co-worker, whether a human operator or an industrial collaborative robot under different control strategies. The method monitors the operator's attention distribution and upper-body kinematics, benefiting from a low-cost stereo camera and state-of-the-art artificial intelligence algorithms applied to the input images (i.e., head pose estimation and skeleton tracking). Three experimental scenarios, varying workstation features and interaction modalities, were designed to test the performance of our online method against state-of-the-art offline measurements. The results demonstrate that our vision-based cognitive load assessment has the potential to be integrated into the next generation of collaborative robotics technologies, which would enable monitoring of the human cognitive state and adaptation of robot control strategies to improve human comfort, ergonomics, and trust in automation.
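A minimal sketch of one way an attention-distribution signal could be derived from head-pose estimates: yaw samples are binned into workspace areas and the normalised entropy of the resulting distribution is used as one possible workload proxy. The `attention_entropy` helper, the area bins, and the synthetic yaw stream are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_entropy(yaw_deg, area_edges=(-60, -20, 20, 60)):
    """Bin head-yaw samples into workspace areas and return the normalised
    entropy of the resulting attention distribution. A flatter (higher-entropy)
    distribution means the operator's gaze is spread across many areas."""
    counts, _ = np.histogram(yaw_deg, bins=np.array(area_edges))
    p = counts / max(counts.sum(), 1)
    p = p[p > 0]
    h = -(p * np.log(p)).sum()
    return h / np.log(len(counts))  # normalise to [0, 1]

# toy stream of yaw angles (degrees), standing in for a head-pose estimator
rng = np.random.default_rng(0)
yaw = rng.normal(loc=0.0, scale=25.0, size=500)
print(f"attention entropy: {attention_entropy(yaw):.2f}")
```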
In hybrid industrial environments, workers' comfort and positive perception of safety are essential requirements for the successful acceptance and use of collaborative robots. This paper proposes a novel human-robot interaction framework in which the robot behaviour is adapted online according to the operator's cognitive workload and stress. The method exploits the generation of B-spline trajectories in joint space and the formulation of a multi-objective optimisation problem to tune the total execution time and smoothness of the robot trajectories online. The former ensures human efficiency and productivity in the workplace, while the latter contributes to preserving the user's comfort and cognitive ergonomics. The performance of the proposed framework was evaluated in a typical industrial task. The results demonstrate its capability to increase the productivity of the human-robot dyad while mitigating the cognitive workload induced in the worker.
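A toy illustration of the trade-off that such a framework tunes online: a B-spline joint trajectory is fitted over a candidate duration and scored by a weighted sum of execution time and mean squared jerk, so that raising the smoothness weight (e.g. under high measured workload) favours slower, smoother motion. The `trajectory_cost` function, the waypoints, and the weights are hypothetical stand-ins, not the paper's multi-objective formulation.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# joint-space waypoints (rad) for a single joint; a real framework handles all joints
waypoints = np.array([0.0, 0.4, 1.1, 0.9, 1.5])

def trajectory_cost(T, w_time, w_smooth, n=200):
    """Fit a cubic B-spline through the waypoints over duration T and return
    a weighted sum of execution time and mean squared jerk."""
    t = np.linspace(0.0, T, len(waypoints))
    spline = make_interp_spline(t, waypoints, k=3)
    ts = np.linspace(0.0, T, n)
    jerk = spline.derivative(3)(ts)
    return w_time * T + w_smooth * np.mean(jerk ** 2)

# sweep candidate durations for two smoothness weights
for w_smooth in (0.01, 0.1):
    durations = np.linspace(2.0, 8.0, 25)
    costs = [trajectory_cost(T, w_time=1.0, w_smooth=w_smooth) for T in durations]
    print(f"w_smooth={w_smooth}: best duration {durations[np.argmin(costs)]:.1f} s")
```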
Data scarcity is one of the main issues with the end-to-end approach for Speech Translation, as compared to the cascaded one. Although most data resources for Speech Translation are originally document-level, they offer a sentence-level view, which can be directly used during training. But this sentence-level view is single and static, potentially limiting the utility of the data. Our proposed data augmentation method SegAugment challenges this idea and aims to increase data availability by providing multiple alternative sentence-level views of a dataset. Our method heavily relies on an Audio Segmentation system to re-segment the speech of each document, after which we obtain the target text with alignment methods. The Audio Segmentation system can be parameterized with different length constraints, thus giving us access to multiple and diverse sentence-level views for each document. Experiments in MuST-C show consistent gains across 8 language pairs, with an average increase of 2.2 BLEU points, and up to 4.7 BLEU for lower-resource scenarios in mTEDx. Additionally, we find that SegAugment is also applicable to purely sentence-level data, as in CoVoST, and that it enables Speech Translation models to completely close the gap between the gold and automatic segmentation at inference time.
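A rough sketch of the re-segmentation idea, assuming word-level alignments are already available (e.g. from a forced aligner): varying a maximum segment length produces alternative sentence-level views of the same document. The `resegment` helper and the toy alignments are illustrative only, not the SegAugment implementation.

```python
# toy document: word-level alignments (word, start_s, end_s)
words = [("hello", 0.0, 0.4), ("world", 0.5, 0.9), ("this", 1.2, 1.4),
         ("is", 1.5, 1.6), ("a", 1.7, 1.8), ("test", 1.9, 2.4),
         ("of", 2.6, 2.7), ("segmentation", 2.8, 3.6)]

def resegment(words, max_len_s):
    """Greedily pack consecutive words into segments no longer than max_len_s,
    returning (start, end, text) triples; each max_len_s yields a different
    sentence-level view of the same document."""
    segments, cur = [], []
    for w, s, e in words:
        if cur and e - cur[0][1] > max_len_s:
            segments.append((cur[0][1], cur[-1][2], " ".join(t for t, _, _ in cur)))
            cur = []
        cur.append((w, s, e))
    if cur:
        segments.append((cur[0][1], cur[-1][2], " ".join(t for t, _, _ in cur)))
    return segments

for max_len in (1.5, 3.0):
    print(max_len, resegment(words, max_len))
```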
While the problem of hallucinations in neural machine translation has long been recognized, progress on alleviating it has so far been limited. Indeed, it recently turned out that, without artificially encouraging models to hallucinate, previously existing methods fall short and even the standard sequence log-probability is more informative. This means that characteristics internal to the model can give much more information than we expect, and before using external models and measures, we first need to ask: how far can we go if we use nothing but the translation model itself? We propose to use a method that evaluates the percentage of the source contribution to a generated translation. Intuitively, hallucinations are translations "detached" from the source, hence they can be identified by low source contribution. This method improves detection accuracy for the most severe hallucinations by a factor of 2 and is able to alleviate hallucinations at test time on par with the previous best approach that relies on external models. Next, if we move away from internal model characteristics and allow external tools, we show that using sentence similarity from cross-lingual embeddings further improves these results.
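A minimal sketch of how a low source-contribution score could flag a detached hypothesis, assuming per-token contribution scores are already available from some attribution method over the encoder-decoder model; the `mean_source_contribution` helper and the toy contribution matrix are illustrative assumptions.

```python
import numpy as np

def mean_source_contribution(contrib):
    """contrib: array of shape (tgt_len, src_len + tgt_len) whose rows sum to 1
    and give, for each generated token, the share of contribution coming from
    each source token and each previously generated token. Returns the average
    total source contribution over the hypothesis."""
    src_len = contrib.shape[1] - contrib.shape[0]
    return contrib[:, :src_len].sum(axis=1).mean()

# toy example: 5 target tokens, 7 source tokens, random contribution rows
rng = np.random.default_rng(1)
contrib = rng.dirichlet(np.ones(12), size=5)
score = mean_source_contribution(contrib)
print(f"mean source contribution: {score:.2f}  (low values suggest hallucination)")
```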
End-to-End speech-to-speech translation (S2ST) is generally evaluated with text-based metrics. This means that generated speech has to be automatically transcribed, making the evaluation dependent on the availability and quality of automatic speech recognition (ASR) systems. In this paper, we propose a text-free evaluation metric for end-to-end S2ST, named BLASER, to avoid the dependency on ASR systems. BLASER leverages a multilingual multimodal encoder to directly encode the speech segments for source input, translation output and reference into a shared embedding space and computes a score of the translation quality that can be used as a proxy for human evaluation. To evaluate our approach, we construct training and evaluation sets from more than 40k human annotations covering seven language directions. The best results of BLASER are achieved by training with supervision from human rating scores. We show that when evaluated at the sentence level, BLASER correlates significantly better with human judgment compared to ASR-dependent metrics, including ASR-SENTBLEU in all translation directions and ASR-COMET in five of them. Our analysis shows that combining speech and text as inputs to BLASER does not increase the correlation with human scores, but the best correlations are achieved when using speech, which motivates the goal of our research. Moreover, we show that using ASR for references is detrimental for text-based metrics.
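An illustrative sketch of an unsupervised BLASER-style score: the translation embedding is compared with the source and reference embeddings in a shared space, and the similarities are averaged. The random vectors below stand in for a real multilingual multimodal speech encoder; the supervised variant described above would instead regress human ratings from these embeddings.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def blaser_like_score(src_emb, mt_emb, ref_emb):
    """Average the similarity of the translation embedding to the source and to
    the reference embeddings; a small trained regressor over these vectors
    would give the supervised variant."""
    return 0.5 * (cosine(mt_emb, src_emb) + cosine(mt_emb, ref_emb))

# stand-ins for speech embeddings from a multilingual multimodal encoder
rng = np.random.default_rng(0)
src, mt, ref = (rng.normal(size=512) for _ in range(3))
print(f"score: {blaser_like_score(src, mt, ref):.3f}")
```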
This report summarises the outcomes of a systematic literature search to identify Bayesian network models used to support decision making in healthcare. After describing the search methodology, the selected research papers are briefly reviewed, with a view to identifying publicly available models and datasets that are well suited to analysis using the causal interventional analysis software tool developed in Wang B, Lyle C, Kwiatkowska M (2021). Finally, an experimental evaluation of applying the software to a selection of models is carried out and preliminary results are reported.
We consider the problem of decision-making under uncertainty in an environment with safety constraints. Many business and industrial applications rely on real-time optimization with changing inputs to improve key performance indicators. In the case of unknown environmental characteristics, real-time optimization becomes challenging, particularly for the satisfaction of safety constraints. We propose the ARTEO algorithm, where we cast multi-armed bandits as a mathematical programming problem subject to safety constraints and learn the environmental characteristics through changes in optimization inputs and through exploration. We quantify the uncertainty in unknown characteristics by using Gaussian processes and incorporate it into the utility function as a contribution which drives exploration. We adaptively control the size of this contribution using a heuristic in accordance with the requirements of the environment. We guarantee the safety of our algorithm with a high probability through confidence bounds constructed under the regularity assumptions of Gaussian processes. Compared to existing safe-learning approaches, our algorithm does not require an exclusive exploration phase and follows the optimization goals even in the explored points, which makes it suitable for safety-critical systems. We demonstrate the safety and efficiency of our approach with two experiments: an industrial process and an online bid optimization benchmark problem.
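A one-dimensional sketch of the kind of GP-based safe exploration step described here, using scikit-learn: Gaussian processes model the unknown objective and safety metric, GP uncertainty is added to the utility to drive exploration, and candidates whose safety upper confidence bound exceeds the limit are discarded. The kernels, the exploration weight `beta`, and the safety limit are illustrative choices rather than ARTEO's actual settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# observed (input, objective, safety-metric) samples from earlier optimisation steps
X = np.array([[0.1], [0.4], [0.7]])
y_obj = np.array([0.2, 0.8, 0.5])      # unknown objective contribution
y_safe = np.array([0.3, 0.6, 0.9])     # unknown safety metric (must stay <= 1.0)

gp_obj = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(X, y_obj)
gp_safe = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(X, y_safe)

candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
mu, sd = gp_obj.predict(candidates, return_std=True)
mu_s, sd_s = gp_safe.predict(candidates, return_std=True)

beta = 2.0                              # exploration weight, adapted heuristically
utility = mu + beta * sd                # GP uncertainty drives exploration
feasible = mu_s + 2.0 * sd_s <= 1.0     # high-probability safety via upper bound
utility[~feasible] = -np.inf
print("next input:", candidates[np.argmax(utility)][0])
```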
In this paper, negatively inclined buoyant jets, which appear during the discharge of wastewater from processes such as desalination, are investigated. To minimize harmful effects and assess environmental impact, a detailed numerical investigation is necessary. The selection of appropriate geometry and working conditions for minimizing such effects often requires numerous experiments and numerical simulations. For this reason, the application of machine learning models is proposed. Several models, including Support Vector Regression, Artificial Neural Networks, Random Forests, XGBoost, CatBoost and LightGBM, were trained. The dataset was built from numerous OpenFOAM simulations, which were validated by experimental data from previous research. The best predictions were obtained by the Artificial Neural Network, with an average R2 of 0.98 and an RMSE of 0.28. To understand the behaviour of the machine learning model and the influence of all parameters on the geometrical characteristics of inclined buoyant jets, the SHAP feature interpretation method was used.
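A small end-to-end sketch of this modelling-plus-interpretation workflow on synthetic data: fit a neural-network regressor, report R2 and RMSE, then compute model-agnostic SHAP values. The dataset, feature count, and network size are placeholders, not the study's configuration.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

# synthetic stand-in for the simulation dataset: three inputs and one
# geometric output of the jet
rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=400)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:300], y[:300])
pred = model.predict(X[300:])
rmse = np.sqrt(mean_squared_error(y[300:], pred))
print(f"R2={r2_score(y[300:], pred):.2f}  RMSE={rmse:.2f}")

# model-agnostic SHAP values show how each input drives individual predictions
explainer = shap.KernelExplainer(model.predict, shap.sample(X[:300], 50))
shap_values = explainer.shap_values(X[300:305], nsamples=100)
print(np.asarray(shap_values).shape)
```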
Large language models (LLMs) have been reported to have strong performance on natural language processing tasks. However, performance metrics such as accuracy do not measure the quality of the model in terms of its ability to robustly represent complex linguistic structure. In this work, we propose a framework to evaluate the robustness of linguistic representations using probing tasks. We leverage recent advances in extracting emergent linguistic constructs from LLMs and apply syntax-preserving perturbations to test the stability of these constructs, in order to better understand the representations learned by LLMs. Empirically, we study the performance of four LLMs across six different corpora on the proposed robustness measures. We provide evidence that context-free representations (e.g., GloVe) are in some cases competitive with context-dependent representations from modern LLMs (e.g., BERT), yet equally brittle to syntax-preserving manipulations. Emergent syntactic representations in neural networks are brittle; our work therefore draws attention to the risk of comparing such structures to those that have been the object of a long-standing debate in linguistics.
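A toy sketch of the probing idea: apply a syntax-preserving word swap (same part of speech, unchanged parse) and measure how much the per-token representations move. The `toy_encode` lookup table stands in for a real static (GloVe) or contextual (BERT) encoder, and the cosine-based stability metric is an illustrative choice, not the paper's measure.

```python
import numpy as np

def perturb(tokens, swaps):
    """Syntax-preserving perturbation: replace content words with words of the
    same part of speech so the parse tree is unchanged."""
    return [swaps.get(t, t) for t in tokens]

def representation_stability(encode, original, perturbed):
    """Mean cosine similarity between per-token representations of the original
    and perturbed sentence; low values indicate brittle representations."""
    a, b = encode(original), encode(perturbed)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return float((num / den).mean())

# stand-in encoder: a random static lookup table playing the role of GloVe/BERT
rng = np.random.default_rng(0)
vocab = {}
def toy_encode(tokens):
    return np.stack([vocab.setdefault(t, rng.normal(size=16)) for t in tokens])

sent = ["the", "engineer", "fixed", "the", "bridge"]
pert = perturb(sent, {"engineer": "surgeon", "bridge": "valve"})
print(f"stability: {representation_stability(toy_encode, sent, pert):.2f}")
```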